Supplementary Information (SI) A: Spiking dynamics as a greedy optimization algorithm on the minimax objective
By plugging in Eq. (4), we obtain the spiking dynamics. Next, we derive the dynamics of the membrane potential; for the E neurons, we proceed similarly. To establish convergence, we cite a theorem from [46] and apply Thm. 1 to our minimax objective, starting with the maximization problem. The last two terms are related to the nonlinear neural activations. Finally, we show that the energy function is decreasing.
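As a concrete illustration of the greedy scheme described above, the sketch below runs single-spike coordinate updates on a toy quadratic saddle objective. This is a minimal sketch under our own assumptions: the function names, the sign convention (E neurons greedily ascend the objective, I neurons greedily descend it), and the toy objective F are illustrative choices, not the paper's exact formulation or notation.

import numpy as np

def greedy_spiking_minimax(F, n_e, n_i, steps=2000, seed=0):
    """Greedy spike-based saddle-point search (illustrative sketch).

    F(r_e, r_i) is a minimax objective: E neurons spike to increase F,
    I neurons spike to decrease F. A candidate spike is a unit increment
    of one neuron's rate, accepted only if it improves that side of the
    objective.
    """
    rng = np.random.default_rng(seed)
    r_e, r_i = np.zeros(n_e), np.zeros(n_i)
    for _ in range(steps):
        # Pick one candidate spiking neuron at random per step.
        if rng.random() < 0.5:
            j = rng.integers(n_e)
            trial = r_e.copy()
            trial[j] += 1.0
            if F(trial, r_i) > F(r_e, r_i):   # E spike: greedy ascent
                r_e = trial
        else:
            j = rng.integers(n_i)
            trial = r_i.copy()
            trial[j] += 1.0
            if F(r_e, trial) < F(r_e, r_i):   # I spike: greedy descent
                r_i = trial
    return r_e, r_i

# Toy saddle objective: concave in the E rates, convex in the I rates.
A = 0.5 * np.eye(4)
B = 0.5 * np.eye(3)
W = 0.1 * np.ones((4, 3))
b = np.array([3.0, 2.0, 4.0, 1.0])
F = lambda re, ri: b @ re - 0.5 * re @ A @ re - re @ W @ ri + 0.5 * ri @ B @ ri
r_e, r_i = greedy_spiking_minimax(F, 4, 3)

Because a spike is accepted only when it improves the spiking neuron's side of the objective, every accepted E spike increases F and every accepted I spike decreases it; this is the discrete counterpart of the monotonicity argument above.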
Author Feedback
We agree that our discussion of Seung et al. was not sufficient, and we will address this in a future version. We kindly ask the reviewer to reconsider the following contributions. The notion of balanced networks is not considered at all in Seung et al., but it is an important phenomenon in neuroscience (Poo et al., 2009; Rupprecht et al., 2018). Using our framework, we constructed balanced spiking neural networks for solving various tasks (reconstruction, fixed-point and manifold attractor dynamics) heavily studied in neuroscience. The attractors include specific inhibitory neurons, and the network weights are pretrained given these specific E and I neurons.
9 Supplementary Material
We use the default train/validation split from nuScenes. All numbers reported in the paper are on the validation split of nuScenes.

Adapt Version (Images): To solve the minimax objective in a single forward-backward pass, we use the gradient-reversal layer (GRL) and warm-up schedule from [16, 15].

Adapt Version (Lidar): Similar to the image version, we solve the minimax objective in a single forward-backward pass using the gradient-reversal layer (GRL) and the warm-up schedule from [16, 15]. We remove the rotation operation from the augmentation pool.
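For concreteness, below is a minimal PyTorch-style sketch of a gradient-reversal layer with a warm-up schedule. The class and function names (GradReverse, grl, warmup_lambda) and the schedule constants are our own illustrative choices; the exact layer and schedule used here are those of [16, 15].

import math
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambda in backward.

    This lets the feature extractor ascend the domain-classifier loss while
    the classifier head descends it, so the minimax objective is optimized
    in a single forward-backward pass.
    """
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is reversed and scaled; lam gets no gradient.
        return -ctx.lam * grad_output, None

def grl(x, lam=1.0):
    return GradReverse.apply(x, lam)

def warmup_lambda(step, total_steps, gamma=10.0):
    # Smoothly ramps lambda from 0 to 1 over training (schedule shape as in
    # Ganin et al.'s GRL); the constants in [16, 15] may differ.
    p = step / max(1, total_steps)
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0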
Minimax Dynamics of Optimally Balanced Spiking Networks of Excitatory and Inhibitory Neurons
Excitation-inhibition balance is ubiquitously observed in the cortex. Recent studies suggest an intriguing link between balance on fast timescales, tight balance, and efficient information coding with spikes. We further this connection by taking a principled approach to optimal balanced networks of excitatory (E) and inhibitory (I) neurons. By deriving E-I spiking neural networks from greedy spike-based optimizations of constrained minimax objectives, we show that tight balance arises from correcting for deviations from the minimax optimum. We predict specific neuron firing rates in the networks by solving the minimax problems, going beyond statistical theories of balanced networks. We design minimax objectives for reconstruction of an input signal, associative memory, and storage of manifold attractors, and derive from them E-I networks that perform the computation. Overall, we present a novel normative modeling approach for spiking E-I networks, going beyond the widely used energy-minimizing networks that violate Dale's law. Our networks can be used to model cortical circuits and computations.
Author Feedback (continued)
We thank the reviewer for the thorough review. We agree that our discussion of Seung et al. was not sufficient; however, our contributions go beyond Seung et al.'s work, and we kindly ask the reviewer to reconsider the following contributions. Such applications were not available in Seung et al. We indeed applied the results in Seung et al. as a tool to provide necessary conditions for convergence of the dynamics. Reviewer 2: We thank the reviewer for the enthusiastic support! We will provide details in the appendix. Minimax objectives: We thank the reviewer for the inspiring question.
Train simultaneously, generalize better: Stability of gradient-based minimax learners
Farnia, Farzan; Ozdaglar, Asuman
The success of minimax learning problems of generative adversarial networks (GANs) has been observed to depend on the minimax optimization algorithm used for their training. This dependence is commonly attributed to the convergence speed and robustness properties of the underlying optimization algorithm. In this paper, we show that the optimization algorithm also plays a key role in the generalization performance of the trained minimax model. To this end, we analyze the generalization properties of standard gradient descent ascent (GDA) and proximal point method (PPM) algorithms through the lens of algorithmic stability under both convex-concave and non-convex non-concave minimax settings. While the GDA algorithm is not guaranteed to have a vanishing excess risk in convex-concave problems, we show the PPM algorithm enjoys a bounded excess risk in the same setup. For non-convex non-concave problems, we compare the generalization performance of stochastic GDA and GDmax algorithms, where the latter fully solves the maximization subproblem at every iteration. Our generalization analysis suggests the superiority of GDA provided that the minimization and maximization subproblems are solved simultaneously with similar learning rates. We discuss several numerical results indicating the role of optimization algorithms in the generalization of the learned minimax models.
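To make the GDA/PPM contrast concrete, the toy sketch below runs both methods on the bilinear objective f(x, y) = xy, the standard convex-concave example where simultaneous GDA spirals away from the saddle point at the origin while PPM contracts toward it. The step size and iteration count are arbitrary illustrative choices, not values from the paper.

import numpy as np

def gda_step(x, y, eta):
    # Simultaneous gradient descent-ascent on f(x, y) = x * y:
    # x descends df/dx = y, y ascends df/dy = x, using the old iterates.
    return x - eta * y, y + eta * x

def ppm_step(x, y, eta):
    # Proximal point method on f(x, y) = x * y: the implicit update
    # x+ = x - eta * y+, y+ = y + eta * x+ solves in closed form as below.
    d = 1.0 + eta ** 2
    return (x - eta * y) / d, (y + eta * x) / d

x_g = y_g = x_p = y_p = 1.0
for _ in range(100):
    x_g, y_g = gda_step(x_g, y_g, eta=0.1)
    x_p, y_p = ppm_step(x_p, y_p, eta=0.1)

# The distance to the saddle grows by sqrt(1 + eta^2) per GDA step and
# shrinks by the same factor per PPM step -- a toy instance of the
# stability gap between the two algorithms analyzed in the paper.
print(np.hypot(x_g, y_g), np.hypot(x_p, y_p))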